# FILE NAME: 00001673.soc
# TITLE: Is it ethical to use AI in the judicial system? [0047b461660b92aaa3c9dae16a3bd17d]
# DESCRIPTION:
# DATA TYPE: soc
# MODIFICATION TYPE: original
# RELATES TO:
# RELATED FILES:
# PUBLICATION DATE: 2025-10-12
# MODIFICATION DATE: 2025-10-12
# NUMBER ALTERNATIVES: 4
# NUMBER VOTERS: 5
# NUMBER UNIQUE ORDERS: 4
# ALTERNATIVE NAME 1: Statement 1 - Final consensus statement: In principle, using AI in the judicial system is a good idea. It would be useful to have a system that could take all the facts into account in a case and come to a conclusion based on those facts. However, it is very difficult to design such a system in a way that it would not have some sort of bias. It would be difficult to ensure that the system would not be biased against certain types of people, for example people from a certain socio-economic background or people of a certain race. The system would also need to be designed in such a way that it would be difficult to hack. If it was possible to change the outcome of a case by hacking the system, this would be a major problem. It may also be difficult to program AI in a way that it could consider things such as intent and mitigating circumstances. All of these things make it probably not a good idea to use AI in the judicial system at the moment, but it might be in the future when AI is more advanced.
# ALTERNATIVE NAME 2: Statement 2 - We agree with the draft consensus statement, but we would also add that AI would be useful in the judicial system if it were used to help judges to take into account all the facts, rather than to replace judges.
# ALTERNATIVE NAME 3: Statement 3 - Final consensus statement: It is not wise to use AI in the judicial system at the moment, as it is very difficult to ensure that it would not have certain biases. Even if it is designed in such a way that it is not biased against certain people, there is still a risk that it would be easily hacked, which could lead to the wrong outcomes. The use of AI in the judicial system is therefore not recommended for the time being.
# ALTERNATIVE NAME 4: Statement 4 - In principle, the use of AI in the judicial system is a good idea. It would be useful to have a system that could take all the facts into account in a case and come to a conclusion based on those facts. However, it is very difficult to design such a system in a way that it would not have some sort of bias. It would be difficult to ensure that the system would not be biased against certain types of people, for example people from a certain socio-economic background or people of a certain race. The system would also need to be designed in such a way that it would be difficult to hack. If it was possible to change the outcome of a case by hacking the system, this would be a major problem. Therefore, it is probably not a good idea to use AI in the judicial system at the moment, but it might be in the future when AI is more advanced and we have a better understanding of the way it works and the potential biases it may have.
2: 1,4,3,2
1: 1,2,4,3
1: 4,1,2,3
1: 4,1,3,2